Neural Networks and the Bias/Variance Dilemma
Authors: Stuart Geman, Elie Bienenstock, and René Doursat
Abstract
Feedforward neural networks trained by error backpropagation are examples of nonparametric regression estimators. We present a tutorial on nonparametric inference and its relation to neural networks, and we use the statistical viewpoint to highlight strengths and weaknesses of neural models. We illustrate the main points with some recognition experiments involving artificial data as well as handwritten numerals. By way of conclusion, we suggest that current-generation feedforward neural networks are largely inadequate for difficult problems in machine perception and machine learning, regardless of parallel-versus-serial hardware or other implementation issues. Furthermore, we suggest that the fundamental challenges in neural modeling are about representation rather than learning per se. This last point is supported by additional experiments with handwritten numerals.
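For readers scanning this page, the quantity behind the "dilemma" in the title is the standard decomposition of expected squared prediction error; the LaTeX sketch below states it for a generic estimator learned from a random training set (the notation \hat{f}_{\mathcal{D}}, f, and \sigma^2 is illustrative shorthand, not lifted verbatim from the paper).

% Bias/variance decomposition of the expected squared error at a fixed
% input x, for an estimator \hat{f}_D trained on a random data set D.
% f(x) = E[y | x] is the regression function; sigma^2(x) is the
% irreducible noise variance.
\begin{align*}
\mathbb{E}_{\mathcal{D},\,y}\!\left[\bigl(y - \hat{f}_{\mathcal{D}}(x)\bigr)^{2} \,\middle|\, x\right]
  = \underbrace{\sigma^{2}(x)}_{\text{noise}}
  + \underbrace{\bigl(\mathbb{E}_{\mathcal{D}}[\hat{f}_{\mathcal{D}}(x)] - f(x)\bigr)^{2}}_{\text{squared bias}}
  + \underbrace{\mathbb{E}_{\mathcal{D}}\!\Bigl[\bigl(\hat{f}_{\mathcal{D}}(x) - \mathbb{E}_{\mathcal{D}}[\hat{f}_{\mathcal{D}}(x)]\bigr)^{2}\Bigr]}_{\text{variance}}
\end{align*}

Flexible estimators such as large feedforward networks can drive the bias term down, but for a fixed sample size they tend to inflate the variance term; this trade-off is what the abstract's conclusions refer to.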
Similar papers
On the Combination of Supervised and Unsupervised Learning
The bias/variance dilemma is addressed in the context of neural networks. A bias constraint based on prior knowledge about the underlying distribution of the data is discussed as a means for reducing the overall error measure of a classifier.
On Bias Plus Variance
This paper presents a Bayesian additive “correction” to the familiar quadratic loss bias-plus-variance formula. It then discusses some other loss-function-specific aspects of supervised learning. It ends by presenting a version of the bias-plus-variance formula appropriate for log loss, and then the Bayesian additive correction to that formula. Both the quadratic loss and log loss correction ter...
Solving the Bias-Variance Problem during Network Training
The following document outlines a theoretical approach to constructing an optimisation function for use in neural network training which could be used to solve the bias-variance dilemma, and thereby achieve optimal generalisation. The idea is rooted in quantitative use of probability and results in a cost function which embodies a form of orthogonal regression. A brief comment regarding biologi...
On Comparison of Adaptive Regularization Methods
Modeling with flexible models, such as neural networks, requires careful control of the model complexity and generalization ability of the resulting model, which finds expression in the ubiquitous bias-variance dilemma [4]. Regularization is a tool for optimizing the model structure, reducing variance at the expense of introducing extra bias (a small numerical sketch of this trade-off appears after this list). The overall objective of adaptive regularization is to tun...
Nonparametric Representation of Neural Activity in Motor Cortex
It is well accepted that neural activity in motor cortex is correlated with hand motion; previous studies of the cosine tuning curve (Georgopoulos et al., 1982) and a modified version (Moran and Schwartz, 1999) are examples revealing such relationships. Here, by analyzing multi-electrode recordings of neural activity and simultaneously recorded hand motion during a continuous tracking task, we intr...
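The adaptive-regularization entry above mentions reducing variance at the expense of extra bias. As a concrete complement, the following is a minimal self-contained sketch (mine, not taken from any of the listed papers) that estimates both terms by Monte Carlo for ridge-penalized polynomial regression; the target function, polynomial degree, noise level, and penalty values are all assumed for illustration.

# Minimal sketch: Monte-Carlo estimate of squared bias and variance
# for ridge-penalized polynomial regression at several penalty values.
# All names and constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    # Arbitrary smooth target function, chosen only for illustration.
    return np.sin(2.0 * np.pi * x)

def fit_ridge(x, y, degree, lam):
    # Closed-form ridge fit: w = (X^T X + lam * I)^{-1} X^T y
    X = np.vander(x, degree + 1, increasing=True)
    A = X.T @ X + lam * np.eye(degree + 1)
    return np.linalg.solve(A, X.T @ y)

def predict(w, x):
    X = np.vander(x, w.size, increasing=True)
    return X @ w

# Average predictions over many independently drawn training sets,
# then read off squared bias and variance at a grid of test points.
x_test = np.linspace(0.0, 1.0, 50)
n_train, n_repeats, degree, noise_sd = 25, 200, 9, 0.3

for lam in (1e-6, 1e-2, 1.0):
    preds = np.empty((n_repeats, x_test.size))
    for r in range(n_repeats):
        x_tr = rng.uniform(0.0, 1.0, n_train)
        y_tr = true_f(x_tr) + noise_sd * rng.standard_normal(n_train)
        preds[r] = predict(fit_ridge(x_tr, y_tr, degree, lam), x_test)
    mean_pred = preds.mean(axis=0)
    bias2 = np.mean((mean_pred - true_f(x_test)) ** 2)
    variance = np.mean(preds.var(axis=0))
    print(f"lambda={lam:8.2g}  bias^2={bias2:.4f}  variance={variance:.4f}")

Running this, the smallest penalty typically shows low bias but large variance, while the largest shows the reverse, which is the trade-off regularization is meant to tune.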
Journal: Neural Computation
Volume 4, Issue: -
Pages: -
Publication year: 1992